Leveraging Permitting and Zoning Data to Predict Upzoning Pressure in Philadelphia
Author
Laura Frances and Nissim Lebovits
Published
December 13, 2023
This model and web application prototype were developed for MUSA508, a Master of Urban Spatial Analytics class focused on predictive public policy analytics at the University of Pennsylvania.
1 Background
Growth is critical for a city to continue to densify and modernize. The benefits of growth range from increased public transit use to a built environment updated for climate resilience. Growth fuels development and vice versa. Philadelphia is the sixth-largest city in the US, yet ranks 42nd in cost of living, and so growth is often met with concern. Many residents and preservationists ask: Will growth deteriorate the city's best features? Will modernization make the city unaffordable to longtime residents?
Balancing growth with affordability is a precarious task for Philadelphia. To date, politicians have favored making exceptions for developers parcel by parcel rather than championing a citywide smart growth strategy. Zoning advocates need better data-driven tools to broadcast the benefits of smart growth, a planning framework that maximizes walkability and transit use to avoid sprawl, and to demonstrate how parcel-by-parcel exceptions, or spot zoning, create unmet development pressure that can drive up costs. Designed to support smart growth advocacy, SmartZoning is a prototype web tool that identifies parcels under development pressure whose current zoning conflicts with that pressure. Users can strategically leverage the tool to promote proactive upzoning of high-priority parcels, aligning current zoning more closely with anticipated development. This approach aims to foster affordable housing in Philadelphia, addressing one of the city's most pressing challenges.
Smart Growth meets SmartZoning®
2 Overview
Below, we outline how we developed a predictive model that effectively forecasts future annual development patterns with a low mean absolute error. This model is the basis of SmartZoning: by anticipating future development, it can highlight where current zoning may hinder smart growth. We also consider the relationship between development pressure and race, income, and housing cost burden, demonstrating the generalizability of our model across different socioeconomic contexts and investigating the impacts of development locally and citywide.
This study leverages open data sources, including permit counts, council district boundaries, racial composition, median income, and housing cost burden, to holistically understand what drives development pressure. Generally, data are collected at the block group or parcel level and aggregated up to the council district to capture both local and citywide trends.
Permit data from 2013 through 2023, collected from the Philadelphia Department of Licenses & Inspections, are the basis of our model. We consider only new construction permits granted for residential projects; in the future, filtering for data on "full" or "substantial" renovations could add nuance to the complexities of development pressure. Given the granular spatial scale of our analysis, and the need to aggregate Census data to our unit of analysis, we chose to aggregate these permit data to the block group level.
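As a rough sketch of this aggregation step (not our exact code), the snippet below spatially joins each permit point to its block group and counts permits per block group per year. It assumes building_permits and block_groups are sf objects sharing a CRS, with block_groups carrying a geoid10 identifier:

library(sf)
library(dplyr)

# Hypothetical sketch: join permits to block groups, count per group-year,
# then reattach block group geometries.
permits_bg <- st_join(building_permits, block_groups) %>%
  st_drop_geometry() %>%
  group_by(geoid10, year) %>%
  summarize(permits_count = n(), .groups = "drop") %>%
  left_join(block_groups, by = "geoid10") %>%
  st_as_sf()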
We note a significant uptick in new construction permits as we approach 2021, followed by a sharp decline. It is generally acknowledged that this trend was due to the expiration of a tax abatement program for developers.
When assessing new construction permit counts by council district, a few districts issued the bulk of new permits during the 2021 peak. Hover over the lines to see more about the volume of permits in each district.
Show the code
perms_x_dist <- st_join(building_permits, council_dists)

perms_x_dist_sum <- perms_x_dist %>%
  st_drop_geometry() %>%
  group_by(DISTRICT, year) %>%
  summarize(permits_count = n())

perms_x_dist_mean <- perms_x_dist_sum %>%
  group_by(year) %>%
  summarize(permits_count = mean(permits_count)) %>%
  mutate(DISTRICT = "Average")

perms_x_dist_sum <- bind_rows(perms_x_dist_sum, perms_x_dist_mean) %>%
  mutate(color = ifelse(DISTRICT != "Average", 0, 1))

ggplotly(
  ggplot(perms_x_dist_sum %>% filter(year > 2013, year < 2024),
         aes(x = year, y = permits_count,
             color = as.character(color),
             group = interaction(DISTRICT, color))) +
    geom_line(lwd = 0.7) +
    labs(title = "Permits per Year by Council District",
         y = "Total Permits") +
    # facet_wrap(~DISTRICT) +
    theme_minimal() +
    theme(axis.title.x = element_blank(),
          legend.position = "none") +
    scale_color_manual(values = c(palette[5], palette[1]))
)
3.2 Spatio-Temporal Features
New construction exhibits sizable spatial and temporal autocorrelation. In other words, there is a strong relationship between the number of permits in a given block group and the number of permits in neighboring block groups, as well as between the number of permits issued in a block group in a given year and the number issued in that same block group the previous year. To account for these relationships, we engineer new features, including both space and time lags. We note that all of these engineered features have strong correlation coefficients with our dependent variable, permits_count, and p-values indicating that these relationships are statistically significant.
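As an illustrative sketch of this feature engineering (the names time_lag_1 and space_lag are placeholders, and the code assumes permits_bg is an sf data frame of block-group-years with geoid10, year, and permits_count, with every block group present in every year):

library(dplyr)
library(sf)
library(spdep)

# Time lag: the same block group's permit count in the previous year.
permits_bg <- permits_bg %>%
  group_by(geoid10) %>%
  arrange(year, .by_group = TRUE) %>%
  mutate(time_lag_1 = dplyr::lag(permits_count, 1)) %>%
  ungroup()

# Space lag: mean permit count among queen-contiguous neighbors within each
# year. The weights are built once on one year's geometries, so this assumes
# a balanced panel.
bg_geoms <- permits_bg %>% filter(year == min(year))
nb  <- poly2nb(bg_geoms, queen = TRUE)                # contiguity neighbor list
wts <- nb2listw(nb, style = "W", zero.policy = TRUE)  # row-standardized weights

permits_bg <- permits_bg %>%
  group_by(year) %>%
  arrange(match(geoid10, bg_geoms$geoid10), .by_group = TRUE) %>%
  mutate(space_lag = lag.listw(wts, permits_count, zero.policy = TRUE)) %>%
  ungroup()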
Socioeconomic factors such as race, income, and housing cost burden play an outsized role in affordability in cities like Philadelphia, which are marred by a pervasive and persistent history of housing discrimination and systemic disinvestment in poor and minority neighborhoods. To account for these issues, we incorporate various data from the US Census Bureau's American Community Survey (ACS) 5-year estimates. Later, we also consider our model's generalizability across different racial and economic contexts to ensure that it will not inadvertently reinforce structural inequity.
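A pull of this kind can be sketched with the tidycensus package. The variable codes below are standard ACS table IDs (B19013_001 for median household income, B03002_003 for the non-Hispanic white population, B03002_001 for total population), though the exact variables and year we used may differ:

library(tidycensus)
library(dplyr)

# Sketch of an ACS 5-year pull at the block group level for Philadelphia.
acs_bg <- get_acs(
  geography = "block group",
  variables = c(med_inc   = "B19013_001",
                nh_white  = "B03002_003",
                total_pop = "B03002_001"),
  state     = "PA",
  county    = "Philadelphia",
  year      = 2022,
  geometry  = TRUE,
  output    = "wide"
) %>%
  mutate(pct_nonwhite = 1 - nh_whiteE / total_popE)  # "E" suffix = estimate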
Spatially, it is clear that non-white communities earn lower median incomes and experience higher rates of extreme rent burden (a household spending more than 35% of its income on gross rent).
“All the complaints about City zoning regulations really boil down to the fact that City Council has suppressed infill housing or restricted multi-family uses, which has served to push average housing costs higher.” - Jon Geeting, Philly 3.0 Engagement Director
SmartZoning® seeks to predict where permits are most likely to be filed as a measure of urban growth. As discussed, predicting growth is fraught because growth is shaped by political forces rather than by the plans published by the city's Planning Commission. Comprehensive plans, typically set on ten-year timelines, tend to become mere suggestions, ultimately subject to the prerogatives of city council members rather than serving as steadfast guides for smart growth. With these dynamics in mind, SmartZoning's prediction model accounts for socioeconomics, council district, and time-space lag.
4.1 Tests for Correlation and Collinearity
4.1.1 Correlation Coefficients
In building our model, we aim to select variables that correlate significantly with permits_count. Using a correlation matrix, we can assess whether our predictors are, in fact, meaningfully associated with our dependent variable. As it turns out, the socioeconomic variables are not (we exclude the other variables, which we have previously established to be significant), but we retain them for the sake of later analysis.
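The screen itself is straightforward; a sketch with cor(), using the same hypothetical feature names as above, might look like:

# Correlations between permits_count and the candidate predictors.
cor_mat <- permits_bg %>%
  sf::st_drop_geometry() %>%
  select(permits_count, space_lag, time_lag_1,
         med_inc, pct_nonwhite, total_pop) %>%
  cor(use = "pairwise.complete.obs")

round(cor_mat["permits_count", ], 2)  # correlations with the dependent variable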
We also aim to minimize or eliminate multicollinearity in our model. For this purpose, we evaluate the variance inflation factor (VIF) of each predictor. The table below lists the VIFs of all of our predictors; we exclude any with a VIF of 5 or more from our final model, including district (the council district) and several historic district and planning overlays.
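The VIF check can be run on a simple linear fit with the car package. This is a sketch under the same assumed feature names, with train standing for the training split described later, not our exact specification:

library(car)

# Fit a linear model on the candidate predictors, then inspect VIFs.
# For factor terms such as district, car reports generalized VIFs.
vif_fit <- lm(permits_count ~ space_lag + time_lag_1 + dist_to_2022 +
                med_inc + pct_nonwhite + district,
              data = train)

vif(vif_fit)  # predictors with VIF >= 5 are dropped from the final model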
Notably, permit count does not have a particularly strong correlation with any of our selected variables. This may lead one to conclude that permits are evenly distributed throughout the city. Indeed, as we can see below, few block groups have more than 50 permits in a year; permits are instead granted on a block-by-block basis across all districts. The need for SmartZoning thus applies to most Philadelphia neighborhoods, not just a select few.
Show the code
ggplot(permits_bg %>% st_drop_geometry() %>% filter(!year %in% c(2024)),
       aes(x = permits_count)) +
  geom_histogram(fill = palette[1], color = NA, alpha = 0.7) +
  labs(title = "Permits per Block Group per Year",
       subtitle = "Log-Transformed",
       y = "Count") +
  scale_x_log10() +
  facet_wrap(~year) +
  theme_minimal() +
  theme(axis.title.x = element_blank())
4.2 Spatial Patterns
In addition to correlation between non-spatial variables, our dependent variable, permits_count, displays a high degree of spatial autocorrelation. That is, the number of permits at a given location is closely related to the number of permits at neighboring locations. We have accounted for this in our model by factoring in spatial lag, and we explore it here by evaluating local Moran's I, a measure of how concentrated high or low values are at a given location. Here, we identify hotspots for new construction in 2023 by looking at statistically significant concentrations of new building permits.
# # Prepare the data
# permits_data <- permits_bg %>%
#   select(permits_count, geoid10, year) %>%
#   na.omit()
#
# # Check for infinite values or other anomalies
# if (any(is.infinite(permits_data$permits_count), na.rm = TRUE)) {
#   stop("Infinite values found in permits_count")
# }
#
# # Create spacetime object
# stc <- as_spacetime(permits_data,
#                     .loc_col = "geoid10",
#                     .time_col = "year")
#
# # Run emerging hotspot analysis
# ehsa <- emerging_hotspot_analysis(
#   x = stc,
#   .var = "permits_count",
#   k = 1,
#   nsim = 3
# )
#
# # Analyze the result
# count(ehsa, classification)
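The commented block above sketches sfdep's emerging hotspot analysis. The local Moran's I calculation described in the text can also be sketched directly with spdep; this is an illustrative sketch assuming permits_23 is the 2023 block-group layer, not our exact code:

library(spdep)

# Neighbors and row-standardized weights for the 2023 block groups.
nb  <- poly2nb(permits_23, queen = TRUE)
wts <- nb2listw(nb, style = "W", zero.policy = TRUE)

# Local Moran's I for permit counts; column 5 holds the p-value
# (its exact name varies across spdep versions).
lisa <- localmoran(permits_23$permits_count, wts, zero.policy = TRUE)

permits_23$local_i <- lisa[, "Ii"]
permits_23$p_val   <- lisa[, 5]

# Flag hotspots: significant clustering with an above-average permit count.
permits_23$hotspot <- permits_23$p_val < 0.05 &
  permits_23$permits_count > mean(permits_23$permits_count)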
4.3 Compare Models
To actually build our model, we had a range of options to choose from. OLS, or ordinary least squares regression, is among the most common; here, we use it as the basis for comparison with a random forest model, which is somewhat more sophisticated. We also considered a Poisson model, but found that it drastically overpredicted for outliers, and we therefore discarded it. As a point of comparison, we built both OLS and random forest models, trained them on data from 2013 through 2021, tested them on 2022 data, and compared the results for accuracy, overfitting, and generalizability.
4.3.1 OLS
OLS (ordinary least squares) is a method for exploring relationships between a dependent variable and one or more explanatory variables, considering both the strength and direction of those relationships and the goodness of model fit. Our model incorporates three engineered groups of features: space lag, time lag, and distance to 2022. We include this last variable because of the Philadelphia tax abatement policy discussed earlier, which led to a significant increase in residential development in the years immediately before 2022. We used this as a baseline model against which to compare the Poisson and random forest models.
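A minimal sketch of this baseline fit, assuming the train (2013 through 2021) and test (2022) splits described in the validation discussion below, with dist_to_2022 as a hypothetical name for the distance-to-2022 feature:

# Baseline OLS fit on the engineered features.
ols_model <- lm(permits_count ~ space_lag + time_lag_1 + dist_to_2022,
                data = train)

# Predict on the 2022 validation split and compute mean absolute error.
test$ols_preds <- predict(ols_model, newdata = test)
mean(abs(test$ols_preds - test$permits_count))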
Overall, we found that our basic OLS model performed quite well; with a mean absolute error (MAE) of 2.68, it is fairly accurate in predicting future development. We also note that it overpredicts in most cases, which, given our goal of anticipating and preparing for high demand for future development, is preferable to underpredicting. That said, it still produces a handful of outliers that deviate substantially from the predicted value. As a result, we considered a random forest model to see if it would handle these outliers better.
Show the code
suppressMessages(
  ggplot(ols_preds, aes(x = permits_count, y = ols_preds)) +
    geom_point() +
    labs(title = "Predicted vs. Actual Permits: OLS",
         subtitle = "2022 Data",
         x = "Actual Permits",
         y = "Predicted Permits") +
    geom_abline() +
    geom_smooth(method = "lm", se = FALSE, color = palette[3]) +
    theme_minimal()
)
4.3.2 Random Forest
Random forest models improve on OLS in their ability to capture non-linear patterns, handle outliers, and so forth. They also tend to be less sensitive to multicollinearity. Thus, we considered whether a random forest model would address some of the weaknesses of the OLS model. Although its MAE of 2.91 is slightly higher than the OLS model's, the range of absolute error was sizably reduced, with outliers exerting less of an impact on the model.
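As with the OLS baseline, a hedged sketch of the fit; the randomForest package is an assumption (any standard implementation would do), and the splits and feature names are the same hypothetical ones used above:

library(randomForest)

# randomForest() cannot handle missing values, so drop incomplete rows first.
train_cc <- train %>% sf::st_drop_geometry() %>% na.omit()
test_cc  <- test  %>% sf::st_drop_geometry() %>% na.omit()

rf_model <- randomForest(permits_count ~ space_lag + time_lag_1 + dist_to_2022,
                         data = train_cc, ntree = 500)

test_cc$rf_test_preds <- predict(rf_model, newdata = test_cc)
mean(abs(test_cc$rf_test_preds - test_cc$permits_count))  # mean absolute error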
Show the code
suppressMessages(
  ggplot(rf_test_preds, aes(x = permits_count, y = rf_test_preds)) +
    geom_point() +
    labs(title = "Predicted vs. Actual Permits: RF",
         subtitle = "2022 Data",
         x = "Actual Permits",
         y = "Predicted Permits") +
    geom_abline() +
    geom_smooth(method = "lm", se = FALSE, color = palette[3]) +
    theme_minimal()
)
Model training, validation, and testing involved three steps. First, we partitioned our data into training, validation, and testing sets, using data from 2013 through 2021 for initial model training. Next, we evaluated our models' ability to accurately predict 2022 construction permits using our validation set, which consisted of all permits issued in 2022; we carried out additional feature engineering and model tuning, iterating based on the results of these splits and seeking to minimize both the mean absolute error (MAE) of our best model and the spread of its absolute error. Finally, when we were satisfied with the results of our best model, we evaluated it once more by training it on all data from 2013 through 2022 and testing it on data from 2023 (all but the last two weeks, which we consider negligible for our purposes), which the model had never "seen" before. As Kuhn and Johnson write in Applied Predictive Modeling (2013), "Ideally, the model should be evaluated on samples that were not used to build or fine-tune the model, so that they provide an unbiased sense of model effectiveness."
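In code, these temporal splits might look like the following sketch (again using the hypothetical names from earlier; years, not random rows, define the partitions):

# Temporal partitions of the block-group panel.
train <- permits_bg %>% filter(year <= 2021)  # initial training
test  <- permits_bg %>% filter(year == 2022)  # validation and tuning
final <- permits_bg %>% filter(year == 2023)  # held-out final evaluation

# Final check: refit the chosen model on 2013-2022, score on unseen 2023 data.
final_fit <- randomForest(
  permits_count ~ space_lag + time_lag_1 + dist_to_2022,
  data = bind_rows(train, test) %>% sf::st_drop_geometry() %>% na.omit()
)
final_cc <- final %>% sf::st_drop_geometry() %>% na.omit()
mean(abs(predict(final_fit, newdata = final_cc) - final_cc$permits_count))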
Again, testing confirms the strength of our model: based on 2023 data, our random forest model produces an MAE of 2.19. We note again that the range of model error is relatively narrow.
The boxplot, which categorizes observations by racial composition, indicates that the random forest model generalizes effectively, showing consistent and relatively low absolute errors across majority non-white and majority white block groups. The similarity of the error distributions suggests that the model's predictive performance remains robust across diverse racial compositions, affirming its ability to generalize successfully.
Our analysis reveals that error is not correlated with affordability and actually trends downward with the percentage of nonwhite residents. This pattern may reflect the fact that majority-minority neighborhoods see comparatively less overall development, diminishing the absolute magnitude of error even where the proportional error is larger. Error does increase slightly with total population, consistent with the intuitive expectation that more people mean more development.
Furthermore, we can identify properties with high potential for assemblage, which suggests the ability to accommodate high-density, multi-unit housing.
Show the code
nbs <- filtered_zoning %>%
  mutate(nb = st_contiguity(geometry))

# Create edge list while handling cases with no neighbors
edge_list <- tibble::tibble(id = 1:length(nbs$nb), nbs = nbs$nb) %>%
  tidyr::unnest(nbs) %>%
  filter(nbs != 0)

# Create a graph with a node for each row in filtered_zoning
g <- make_empty_graph(n = nrow(filtered_zoning))
V(g)$name <- as.character(1:nrow(filtered_zoning))

# Add edges if they exist; `edges` is defined outside the if() so the
# neighbor check below works even when there are no edges
edges <- as.matrix(edge_list)
if (nrow(edge_list) > 0) {
  g <- add_edges(g, c(t(edges)))
}

# Calculate the number of contiguous neighbors; nodes without neighbors
# count as 1 (themselves)
n_contiguous <- sapply(V(g)$name, function(node) {
  if (node %in% edges) {
    length(neighborhood(g, order = 1, nodes = as.numeric(node))[[1]])
  } else {
    1
  }
})

filtered_zoning <- filtered_zoning %>%
  mutate(n_contig = n_contiguous)

filtered_zoning %>%
  st_drop_geometry() %>%
  select(rf_val_preds, n_contig, OBJECTID, CODE) %>%
  filter(rf_val_preds > 10, n_contig > 2) %>%
  arrange(desc(rf_val_preds)) %>%
  kablerize(caption = "Poorly-Zoned Properties with High Development Risk")
Poorly-Zoned Properties with High Development Risk
|         | rf_val_preds | n_contig | OBJECTID | CODE |
|---------|--------------|----------|----------|------|
| 868     | 27.94353     | 3        | 1615     | ICMX |
| 1548    | 27.94353     | 3        | 2736     | IRMX |
| 1587    | 27.94353     | 3        | 2804     | IRMX |
| 3420    | 27.94353     | 3        | 6405     | RSA5 |
| 4667    | 27.94353     | 3        | 9661     | RSA5 |
| 9169    | 27.94353     | 4        | 20073    | ICMX |
| 7517    | 22.04183     | 3        | 16717    | RSA5 |
| 1768    | 21.73393     | 3        | 3128     | IRMX |
| 3640    | 21.73393     | 3        | 6901     | ICMX |
| 4957    | 21.68943     | 3        | 10410    | ICMX |
| 4958    | 21.68943     | 3        | 10411    | RSA5 |
| 4959    | 21.68943     | 3        | 10412    | ICMX |
| 5245    | 21.68943     | 3        | 11160    | RSA5 |
| 3934    | 17.81977     | 3        | 7646     | ICMX |
| 12326   | 17.81977     | 4        | 25776    | RSA5 |
| 13578   | 16.39950     | 3        | 27869    | IRMX |
| 4460    | 16.21347     | 3        | 9093     | RSA5 |
| 7726    | 16.00803     | 3        | 17168    | ICMX |
| 5088    | 14.95410     | 3        | 10759    | IRMX |
| 4512    | 14.36920     | 5        | 9243     | IRMX |
| 6014    | 14.36920     | 6        | 13057    | ICMX |
| 3041    | 14.17370     | 3        | 5568     | ICMX |
| 9842    | 14.17370     | 3        | 21369    | RSA5 |
| 9843    | 14.17370     | 3        | 21370    | ICMX |
| 9845    | 14.17370     | 3        | 21372    | RSA5 |
| 6645    | 12.52903     | 3        | 14648    | ICMX |
| 7280    | 12.52903     | 3        | 16179    | RSA5 |
| 9912    | 12.52903     | 3        | 21527    | ICMX |
| 7833    | 11.57457     | 3        | 17408    | RSA5 |
| 3957    | 11.13063     | 3        | 7704     | IRMX |
| 2138    | 11.07510     | 4        | 3744     | IRMX |
| 8143    | 11.04757     | 3        | 18031    | RSD3 |
| 8656    | 11.04757     | 3        | 19076    | RSA3 |
| 9409    | 11.04757     | 4        | 20534    | RSA2 |
| 10175   | 11.04757     | 3        | 22002    | RSD1 |
| 12605   | 11.04757     | 3        | 26247    | RSD1 |
| 4146    | 10.92453     | 3        | 8265     | IRMX |
| 5108    | 10.92453     | 4        | 10795    | IRMX |
| 1536    | 10.63863     | 4        | 2715     | ICMX |
| 2422    | 10.63863     | 5        | 4284     | IRMX |
| 2941    | 10.63863     | 4        | 5351     | RSA5 |
| 10490.1 | 10.63863     | 3        | 22527    | I3   |
| 10810   | 10.63863     | 5        | 23106    | ICMX |
| 11135.1 | 10.63863     | 8        | 23678    | I3   |
8 2024 Predictions
Show the code
tmap_mode('plot')

tmap_theme(
  tm_shape(rf_proj_preds) +
    tm_polygons(col = "rf_proj_preds",
                border.alpha = 0,
                palette = mono_5_green,
                style = "fisher",
                colorNA = "lightgrey",
                title = "Predicted New Permits") +
    tm_shape(broad_and_market) +
    tm_lines(col = "lightgrey") +
    tm_layout(frame = FALSE),
  "Projected New Development, 2024"
)